Universal and Robust Distributed Network Codes
Random linear network codes can be designed and implemented in a distributed
manner, with low computational complexity. However, these codes are classically
implemented over finite fields whose size depends on some global network
parameters (size of the network, the number of sinks) that may not be known
prior to code design. Also, if new nodes join, the entire network code may
have to be redesigned.
In this work, we present the first universal and robust distributed linear
network coding schemes. Our schemes are universal since they are independent of
all network parameters. They are robust since if nodes join or leave, the
remaining nodes do not need to change their coding operations and the receivers
can still decode. They are distributed since nodes need only have topological
information about the part of the network upstream of them, which can be
naturally streamed as part of the communication protocol.
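As a rough illustration of the distributed operation described above, a node performing random linear network coding simply forwards random linear combinations of its incoming packets over a finite field. A minimal GF(2) sketch (all names hypothetical, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def rlnc_recode(incoming, num_out, rng):
    """Emit random GF(2) linear combinations of incoming packets.

    incoming: (k, L) array of bits, one row per received packet.
    Returns a (num_out, L) array of recoded packets.
    """
    k = incoming.shape[0]
    coeffs = rng.integers(0, 2, size=(num_out, k))  # random coding coefficients
    return coeffs @ incoming % 2                    # GF(2): XOR of selected rows

# Four source packets of 8 bits each.
packets = rng.integers(0, 2, size=(4, 8))
coded = rlnc_recode(packets, num_out=6, rng=rng)

# With enough linearly independent combinations, the coded packets span
# the same space as the originals, so a sink can decode by Gaussian
# elimination over GF(2).
```

The schemes in the paper differ in that each node's effective field size depends on its distance from the source, rather than being fixed network-wide.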
We present both probabilistic and deterministic schemes that are all
asymptotically rate-optimal in the coding block-length, and have guarantees of
correctness. Our probabilistic designs are computationally efficient, with
order-optimal complexity. Our deterministic designs guarantee zero error
decoding, albeit via codes with high computational complexity in general. Our
coding schemes are based on network codes over "scalable fields". Instead of
choosing coding coefficients from one field at every node, each node uses
linear coding operations over an "effective field-size" that depends on the
node's distance from the source node. The analysis of our schemes requires
technical tools that may be of independent interest. In particular, we
generalize the Schwartz-Zippel lemma by proving a non-uniform version, wherein
variables are chosen from sets of possibly different sizes. We also provide a
novel robust distributed algorithm to assign unique IDs to network nodes.
Comment: 12 pages, 7 figures, 1 table, under submission to INFOCOM 201
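For reference, the classical Schwartz–Zippel lemma and a non-uniform variant of the kind alluded to above can be stated roughly as follows (the exact statement proved in the paper may differ):

```latex
% Classical Schwartz--Zippel: for a nonzero polynomial P of total degree d
% over a field \mathbb{F}, with each r_i drawn uniformly from S \subseteq \mathbb{F},
\Pr[P(r_1,\dots,r_n) = 0] \le \frac{d}{|S|}.

% Non-uniform variant (sketch): with r_i drawn uniformly from sets
% S_i \subseteq \mathbb{F} of possibly different sizes, and d_i the degree
% of P in the variable x_i,
\Pr[P(r_1,\dots,r_n) = 0] \le \sum_{i=1}^{n} \frac{d_i}{|S_i|}.
```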
On combining information-theoretic and cryptographic approaches to network coding security against the pollution attack
In this paper we consider the pollution attack in network coded systems where network nodes are computationally limited. We consider the combined use of cryptographic signature-based security and information-theoretic network error correction, and propose a fountain-like network error correction code construction suitable for this purpose.
StyleTime: Style Transfer for Synthetic Time Series Generation
Neural style transfer is a powerful computer vision technique that can
incorporate the artistic "style" of one image to the "content" of another. The
underlying theory behind the approach relies on the assumption that the style
of an image is represented by the Gram matrix of its features, which is
typically extracted from pre-trained convolutional neural networks (e.g.,
VGG-19). This idea does not straightforwardly extend to time series stylization
since notions of style for two-dimensional images are not analogous to notions
of style for one-dimensional time series. In this work, a novel formulation of
time series style transfer is proposed for the purpose of synthetic data
generation and enhancement. We introduce the concept of stylized features for
time series, which are directly related to the realism properties of time
series, and propose a novel stylization algorithm, called StyleTime, that uses
explicit
feature extraction techniques to combine the underlying content (trend) of one
time series with the style (distributional properties) of another. Further, we
discuss evaluation metrics, and compare our work to existing state-of-the-art
time series generation and augmentation schemes. To validate the effectiveness
of our methods, we use stylized synthetic data as a means for data augmentation
to improve the performance of recurrent neural network models on several
forecasting tasks.
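The core idea, combining the content (trend) of one series with the style (distributional properties) of another, can be illustrated with a hand-rolled toy stylization. This is only a sketch of the idea, not the StyleTime algorithm itself; all names are hypothetical:

```python
import numpy as np

def transfer_style(content, style, window=10):
    """Toy stylization: keep the trend of `content` and borrow the
    residual (distributional) behavior of `style`.
    """
    kernel = np.ones(window) / window
    trend = np.convolve(content, kernel, mode="same")            # content trend
    style_resid = style - np.convolve(style, kernel, mode="same")  # style residuals
    return trend + style_resid

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
content = 2.0 * t                            # smooth upward trend
style = np.cumsum(rng.normal(0, 0.1, 200))   # noisy random walk
stylized = transfer_style(content, style)
```

StyleTime replaces the crude moving-average split above with explicit stylized-feature extraction tied to time-series realism properties.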
Network Coding for Error Correction
In this thesis, network error correction is considered from both theoretical and practical viewpoints. Theoretical parameters such as network structure and type of connection (multicast vs. nonmulticast) have a profound effect on network error correction capability. This work is also dictated by the practical network issues that arise in wireless ad-hoc networks, networks with limited computational power (e.g., sensor networks) and real-time data streaming systems (e.g., video/audio conferencing or media streaming).
Firstly, multicast network scenarios with probabilistic error and erasure occurrence are considered. In particular, it is shown that in networks with both random packet erasures and errors, increasing the relative occurrence of erasures compared to errors favors network coding over forwarding at network nodes, and vice versa. Also, fountain-like error-correcting codes, for which redundancy is incrementally added until decoding succeeds, are constructed. These codes are appropriate for use in scenarios where the upper bound on the number of errors is unknown a priori.
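The fountain-like behavior described above, redundancy added incrementally until decoding succeeds, can be sketched in miniature over GF(2). This is a toy illustration under simplified assumptions (random combinations over an erasure channel), not the thesis's code construction:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def fountain_decode_rounds(k, erasure_p, rng):
    """Keep sending random GF(2) combinations of k source packets until
    the receiver has collected k linearly independent ones."""
    received = np.zeros((0, k), dtype=int)
    rounds = 0
    while gf2_rank(received) < k:
        rounds += 1
        coeff = rng.integers(0, 2, size=(1, k))
        if rng.random() > erasure_p:          # packet survives the channel
            received = np.vstack([received, coeff])
    return rounds

rng = np.random.default_rng(2)
rounds = fountain_decode_rounds(k=8, erasure_p=0.2, rng=rng)
```

The appeal of this incremental scheme is precisely what the abstract notes: no a priori bound on the number of errors or erasures is needed.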
Secondly, network error correction in multisource multicast and nonmulticast network scenarios is discussed. Capacity regions for multisource multicast network error correction with both known and unknown topologies (coherent and noncoherent network coding) are derived. Several approaches to lower- and upper-bounding error-correction capacity regions of general nonmulticast networks are given. For 3-layer two-sink and nested-demand nonmulticast network topologies some of the given lower and upper bounds match. For these network topologies, code constructions that employ only intrasession coding are designed. These designs can be applied to streaming erasure correction code constructions.
K-SHAP: Policy Clustering Algorithm for Anonymous State-Action Pairs
Learning agent behaviors from observational data has shown to improve our
understanding of their decision-making processes, advancing our ability to
explain their interactions with the environment and other agents. While
multiple learning techniques have been proposed in the literature, there is one
particular setting that has not been explored yet: multi-agent systems where
agent identities remain anonymous. For instance, in financial markets labeled
data that identifies market participant strategies is typically proprietary,
and only the anonymous state-action pairs that result from the interaction of
multiple market participants are publicly available. As a result, sequences of
agent actions are not observable, restricting the applicability of existing
work. In this paper, we propose a Policy Clustering algorithm, called K-SHAP,
that learns to group anonymous state-action pairs according to the agent
policies. We frame the problem as an Imitation Learning (IL) task, and we learn
a world-policy able to mimic all the agent behaviors upon different
environmental states. We leverage the world-policy to explain each anonymous
observation through an additive feature attribution method called SHAP (SHapley
Additive exPlanations). Finally, by clustering the explanations we show that we
are able to identify different agent policies and group observations
accordingly. We evaluate our approach on simulated synthetic market data and a
real-world financial dataset. We show that our proposal significantly and
consistently outperforms the existing methods, identifying different agent
strategies.
Comment: ICML 202
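The explain-then-cluster pipeline described above can be sketched end to end with scikit-learn. This is a simplified stand-in, not K-SHAP itself: the real method uses SHAP values, while the sketch below uses a crude linear-model attribution (coefficient times centered feature); all data and names are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Anonymous state-action pairs generated by two hidden policies (the
# policy identities are used only to synthesize data, never by the method).
states = rng.normal(size=(400, 2))
actions = np.where(np.arange(400) < 200,
                   (states[:, 0] > 0).astype(int),   # policy A looks at x0
                   (states[:, 1] > 0).astype(int))   # policy B looks at x1

# 1. "World-policy": a single model fit to mimic all agents at once.
world_policy = LogisticRegression().fit(states, actions)

# 2. Per-sample attributions (crude linear stand-in for SHAP values).
attributions = world_policy.coef_ * (states - states.mean(axis=0))

# 3. Cluster the explanations to group observations by (latent) policy.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(attributions)
```

The intuition is that two state-action pairs produced by the same policy should receive similar explanations from the world-policy, so clustering in explanation space recovers policy groups.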
Outer bounds on the error correction capacity region for non-multicast networks
In this paper we study the capacity regions of non-multicast networks that are susceptible to adversarial errors. We derive outer bounds on the error correction capacity region and give a family of single- and two-source two-sink 3-layer networks for which these bounds are tight.
Rate regions for coherent and noncoherent multisource network error correction
In this paper we derive capacity regions for network error correction with both known and unknown topologies (coherent and non-coherent network coding) under a multiple-source multicast transmission scenario. For the multiple-source non-multicast scenario, given any achievable network code for the error-free case, we construct a code with a reduced rate region for the case with errors.
Equitable Marketplace Mechanism Design
We consider a trading marketplace that is populated by traders with diverse
trading strategies and objectives. The marketplace allows the suppliers to list
their goods and facilitates matching between buyers and sellers. In return,
such a marketplace typically charges fees for facilitating trade. The goal of
this work is to design a dynamic fee schedule for the marketplace that is
equitable and profitable to all traders while being profitable to the
marketplace at the same time (from charging fees). Since the traders adapt
their strategies to the fee schedule, we present a reinforcement learning
framework for simultaneously learning a marketplace fee schedule and trading
strategies that adapt to this fee schedule using a weighted optimization
objective of profits and equitability. We illustrate the use of the proposed
approach in detail on a simulated stock exchange with different types of
investors, specifically market makers and consumer investors. As we vary the
equitability weights across different investor classes, we see that the learnt
exchange fee schedule starts favoring the class of investors with the highest
weight. We further discuss the observed insights from the simulated stock
exchange in light of the general framework of equitable marketplace mechanism
design.
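The weighted optimization objective of profits and equitability mentioned above can be sketched as a single scalar reward. This is a minimal illustration under assumed definitions (equitability as negative dispersion of per-class profits; the paper's exact measure may differ), with all names hypothetical:

```python
import numpy as np

def marketplace_reward(fee_revenue, class_profits, equity_weight):
    """Weighted objective: marketplace fee revenue traded off against
    equitability across investor classes, here taken as the negative
    standard deviation of their profits."""
    equitability = -np.std(class_profits)
    return (1 - equity_weight) * fee_revenue + equity_weight * equitability

# A higher equity_weight favors fee schedules that even out class profits.
balanced = marketplace_reward(10.0, np.array([5.0, 5.2]), equity_weight=0.5)
skewed = marketplace_reward(10.0, np.array([1.0, 9.0]), equity_weight=0.5)
```

Sweeping `equity_weight` corresponds to the experiment in the abstract where the learnt fee schedule shifts toward the investor class with the highest weight.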
ATMS: Algorithmic Trading-Guided Market Simulation
The effective construction of an Algorithmic Trading (AT) strategy often
relies on market simulators, which remains challenging due to existing methods'
inability to adapt to the sequential and dynamic nature of trading activities.
This work fills this gap by proposing a metric to quantify market discrepancy.
This metric measures the difference in causal effects arising from each
market's unique underlying characteristics, and it is evaluated through the
interaction between the AT agent and the market. Most importantly, we introduce Algorithmic
Trading-guided Market Simulation (ATMS) by optimizing our proposed metric.
Inspired by SeqGAN, ATMS formulates the simulator as a stochastic policy in
reinforcement learning (RL) to account for the sequential nature of trading.
Moreover, ATMS utilizes the policy gradient update to bypass differentiating
the proposed metric, which involves non-differentiable operations such as order
deletion from the market. Through extensive experiments on semi-real market
data, we demonstrate the effectiveness of our metric and show that ATMS
generates market data with improved similarity to reality compared to the
state-of-the-art conditional Wasserstein Generative Adversarial Network (cWGAN)
approach. Furthermore, ATMS produces market data with more balanced BUY and
SELL volumes, mitigating the bias of the cWGAN baseline approach, where a
simple strategy can exploit the BUY/SELL imbalance for profit.
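The key trick noted above, using a policy-gradient update to avoid differentiating a metric that involves non-differentiable operations, is the classic score-function (REINFORCE) estimator: the reward only needs to be evaluated on sampled actions, never differentiated. A self-contained sketch with a toy non-differentiable reward (not the ATMS simulator or metric):

```python
import numpy as np

rng = np.random.default_rng(4)

def reinforce_step(logits, reward_fn, lr=0.1, n_samples=256):
    """One policy-gradient step on a categorical policy. `reward_fn`
    (the market-discrepancy metric in ATMS) may be non-differentiable,
    e.g. it may delete orders from the market internally."""
    probs = np.exp(logits) / np.exp(logits).sum()
    actions = rng.choice(len(probs), size=n_samples, p=probs)
    rewards = np.array([reward_fn(a) for a in actions])
    baseline = rewards.mean()                  # variance reduction
    grad = np.zeros_like(logits)
    for a, r in zip(actions, rewards):
        grad_log = -probs
        grad_log[a] += 1.0                     # d log pi(a) / d logits
        grad += (r - baseline) * grad_log
    return logits + lr * grad / n_samples

# Toy non-differentiable reward: a step function favoring action 2.
reward = lambda a: 1.0 if a == 2 else 0.0
logits = np.zeros(3)
for _ in range(200):
    logits = reinforce_step(logits, reward)
```

After training, the policy concentrates probability on the high-reward action even though the reward is a step function with zero gradient almost everywhere, which is exactly why ATMS can optimize its discrepancy metric despite operations like order deletion.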